| Parameter | Value | Variable |
|---|---|---|
| Series Code | 21864 | ts |
| Maximum Lag | 48 | lag.max |
| Forecast Horizon | 12 | n.ahead |
| Unit Root Test | ADF, drift, 11, AIC, 5pct | ur.test |
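In a BETS dynamic report, these parameters are bound to R objects before the analysis runs. A minimal sketch of what those bindings might look like (the field names in `ur.test` are assumptions, mirroring the arguments of `urca::ur.df`):

```r
# Hypothetical parameter bindings for the values in the table above
ts      <- 21864    # BETS series code
lag.max <- 48       # maximum lag shown in the correlograms
n.ahead <- 12       # forecast horizon
# Unit root test settings; field names are assumed, after urca::ur.df
ur.test <- list(mode = "ADF", type = "drift", lags = 11,
                selectlags = "AIC", level = "5pct")
```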
library(BETS)
data <- BETS.get(ts)
info <- BETS.search(code = ts, view = FALSE)
| Code | Description | Periodicity | Unit | Start | Last Value | Source |
|---|---|---|---|---|---|---|
| 21864 | Physical Production - Intermediate goods | Monthly | Index | 01/01/2002 | mar/2017 | IBGE |
library(mFilter)
trend = fitted(hpfilter(data))
library(dygraphs)
dygraph(cbind(Series = data, Trend = trend), main = info[,"Description"]) %>%
dyRangeSelector(strokeColor = "gray", fillColor = "gray") %>%
dyAxis("y", label = info[,"Unit"])
test.params <- append(list(y = data), ur.test)
df <- do.call(BETS.ur_test, test.params)
df$results
## statistic crit.val rej.H0
## tau2 -1.539410 -2.88 no
## phi1 1.229531 4.63 yes
At the 95% confidence level, the tau2 statistic is greater than (less negative than) its critical value, so the null hypothesis of a unit root cannot be rejected. The phi1 statistic, in contrast, rejects the joint null of a unit root with no drift.
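The decision rule behind the tau2 row can be reproduced directly from the printed numbers: for this lower-tailed test, H0 (unit root) is rejected only when the statistic is more negative than the critical value. A base-R sketch, with the values copied from the output above:

```r
# tau2 decision rule (values copied from the BETS.ur_test output)
tau2_stat <- -1.539410
tau2_crit <- -2.88
reject_h0 <- tau2_stat < tau2_crit  # FALSE: cannot reject the unit-root null
```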
The next test, for seasonal unit roots, is performed at lag 12, the frequency of series 21864 (a monthly series).
library(forecast)
s_roots = nsdiffs(data)
print(s_roots)
## [1] 0
According to the OCSB test, there is no seasonal unit root, at least at a 5% significance level.
d_ts <- data
BETS.corrgram(d_ts, lag.max = lag.max, mode = "bartlett", knit = T)
BETS.corrgram(d_ts, lag.max = lag.max, mode = "simple", type = "partial", knit = T)
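The same diagnostics can be produced with base R's `acf` and `pacf`, which `BETS.corrgram` wraps with richer plotting. A self-contained sketch on a simulated AR(1) series (`d_ts` itself comes from BETS, so a stand-in is used here):

```r
# Base-R analogue of the correlograms above, on simulated data
set.seed(123)
x <- arima.sim(model = list(ar = 0.7), n = 183)  # stand-in for d_ts
a <- acf(x, lag.max = 48, plot = FALSE)          # autocorrelations, lags 0..48
p <- pacf(x, lag.max = 48, plot = FALSE)         # partial autocorrelations, lags 1..48
```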
The correlograms from the last section give us enough information to attempt to identify the underlying SARIMA model parameters. We can confirm our guess by running the auto.arima function from the forecast package. By default, this function uses the AICc (Akaike Information Criterion with finite-sample correction) for model selection. Here, we are going to use the BIC (Bayesian Information Criterion), whose penalty term for the number of parameters is larger than the AIC's.
model = auto.arima(data, ic = "bic", test = tolower(ur.test$mode))
summary(model)
## Series: data
## ARIMA(3,0,1)(1,0,0)[12] with non-zero mean
##
## Coefficients:
## ar1 ar2 ar3 ma1 sar1 mean
## 1.3701 -0.1013 -0.3325 -0.6919 0.8481 90.8723
## s.e. 0.1431 0.1668 0.0698 0.1430 0.0360 4.7822
##
## sigma^2 estimated as 7.37: log likelihood=-454.74
## AIC=923.47 AICc=924.1 BIC=946.05
##
## Training set error measures:
## ME RMSE MAE MPE MAPE MASE
## Training set 0.1649182 2.670634 2.037619 0.1003209 2.178236 0.4665688
## ACF1
## Training set -0.02589043
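The two criteria differ only in the penalty term: AIC = 2k - 2 log L, while BIC = k log(n) - 2 log L, so here BIC charges roughly log(n) ≈ 5.2 per parameter instead of 2. As a sanity check, the reported values can be recomputed from the log likelihood, assuming n = 183 monthly observations (Jan 2002 to Mar 2017) and k = 7 estimated quantities (six coefficients plus sigma^2):

```r
# Recomputing the information criteria from the reported log likelihood
logL <- -454.74   # from the model summary above
k    <- 7         # ar1..ar3, ma1, sar1, mean, sigma^2
n    <- 183       # assumed sample size (monthly, Jan 2002 - Mar 2017)
aic  <- 2 * k - 2 * logL        # close to the reported 923.47
bic  <- log(n) * k - 2 * logL   # close to the reported 946.05
```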
We see that, according to the BIC, the best model is an ARIMA(3,0,1)(1,0,0)[12] with non-zero mean.
BETS.predict(model, h = n.ahead, main = info[,"Description"], ylab = info[,"Unit"], knit = T)